Mapping the Klangdom Live: Cartographies for piano with two performers and electronics
The use of high-density loudspeaker arrays (HDLAs) has recently experienced rapid growth in a wide variety of technical and aesthetic approaches. Still less explored, however, are applications to interactive music with live acoustic instruments. How can immersive spatialization accompany an instrument that already has its own rich spatial diffusion pattern, like the grand piano, in the context of a score-based concert work? Potential models include treating the spatialized electronic sound in analogy to the diffusion pattern of the instrument, with spatial dimensions parametrized as functions of timbral features. Another approach is to map the concert hall as a three-dimensional projection of the instrument’s internal physical layout, a kind of virtual sonic microscope. Or, the diffusion of electronic spatial sound can be treated as an independent polyphonic element, complementary to but not dependent upon the instrument’s own spatial characteristics. Cartographies (2014), for piano with two performers and electronics, explores each of these models individually and in combination, as well as their technical implementation with the Meyer Sound Matrix3 system of the Südwestrundfunk Experimentalstudio in Freiburg, Germany, and the 43.4-channel Klangdom of the Institut für Musik und Akustik at the Zentrum für Kunst und Medien in Karlsruhe, Germany. The process of composing, producing, and performing the work raises intriguing questions, and offers invaluable hints, for the composition and performance of live interactive works with HDLAs in the future.
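The first model, spatial dimensions parametrized as functions of timbral features, can be sketched in a few lines. The mapping below is a hypothetical illustration, not the one used in Cartographies: it sends the spectral centroid of an audio frame logarithmically onto loudspeaker elevation, so that brighter material rises overhead. The frequency bounds (200 Hz to 4 kHz) are invented for the example.

```python
import numpy as np

def spectral_centroid(frame, sr):
    """Spectral centroid in Hz of one audio frame (Hann-windowed
    to limit spectral-leakage bias)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    total = spectrum.sum()
    return float((freqs * spectrum).sum() / total) if total > 0 else 0.0

def centroid_to_elevation(centroid_hz, lo=200.0, hi=4000.0):
    """Map centroid logarithmically onto elevation in degrees:
    dark sounds near the floor (0 deg), bright sounds overhead (90 deg)."""
    c = np.clip(centroid_hz, lo, hi)
    t = np.log(c / lo) / np.log(hi / lo)
    return float(90.0 * t)

sr = 44100
t = np.arange(2048) / sr
bright = np.sin(2 * np.pi * 3000 * t)  # high sine: raised overhead
dark = np.sin(2 * np.pi * 220 * t)     # low sine: stays near the floor
assert centroid_to_elevation(spectral_centroid(bright, sr)) > \
       centroid_to_elevation(spectral_centroid(dark, sr))
```

In practice such a mapping would be one stream among several, layered with the other two models described above.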
Corpus-Based Transcription as an Approach to the Compositional Control of Timbre
Timbre space is a cognitive model useful to address the problem of structuring timbre in electronic music. The recent concept of corpus-based concatenative sound synthesis is proposed as an approach to timbral control in both real- and deferred-time applications. Using CataRT and related tools in the FTM and Gabor libraries for Max/MSP, we describe a technique for real-time analysis of a live signal to pilot corpus-based synthesis, along with examples of compositional realizations in works for instruments, electronics, and sound installation. To extend this technique to computer-assisted composition for acoustic instruments, we develop tools using the Sound Description Interchange Format (SDIF) to export sonic descriptors to OpenMusic, where they may be further manipulated and transcribed into an instrumental score. This presents a flexible technique for the compositional organization of noise-based instrumental sounds.
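At its core, corpus-based concatenative synthesis selects, for each target analysis frame, the corpus unit nearest in descriptor space. The following is a minimal sketch of that selection step; the toy corpus, the descriptor choice, and the normalization are invented for illustration and do not reproduce CataRT's actual implementation.

```python
import numpy as np

# Toy corpus: each unit (sound grain) summarized by two descriptors,
# e.g. [spectral centroid (Hz), loudness (dB)]. Values are invented.
corpus = np.array([
    [300.0, -20.0],
    [1200.0, -12.0],
    [2500.0, -6.0],
    [4000.0, -18.0],
])

def select_unit(target, corpus, scale):
    """Return the index of the corpus unit nearest to the target
    descriptor vector, after per-descriptor normalization."""
    d = (corpus - target) / scale
    return int(np.argmin(np.einsum("ij,ij->i", d, d)))

# Normalize each descriptor by its spread so Hz and dB are comparable.
scale = corpus.std(axis=0)
# A bright, loud target should select the bright, loud unit (index 2).
assert select_unit(np.array([2600.0, -5.0]), corpus, scale) == 2
```

In a real-time setting the target vector would come from live analysis of the input signal, frame by frame, so the corpus is effectively "played" by the performer's timbre.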
Introducing CatOracle: Corpus-based concatenative improvisation with the Audio Oracle algorithm
CATORACLE responds to the need to join high-level control of audio timbre with the organization of musical form in time. It is inspired by two powerful existing tools: CataRT for corpus-based concatenative synthesis based on the MUBU for MAX library, and PYORACLE for computer improvisation, combining for the first time audio descriptor analysis with the learning and generation of musical structures. Harnessing a user-defined list of audio features, live or prerecorded audio is analyzed to construct an “Audio Oracle” as a basis for improvisation. CatOracle also extends features of classic concatenative synthesis to include live interactive audio mosaicking and score-based transcription using the BACH library for MAX. The project suggests applications not only to live performance of written and improvised electroacoustic music, but also to computer-assisted composition and musical analysis.
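The structure-learning component can be illustrated with the classic factor oracle construction that underlies the Audio Oracle. In a real Audio Oracle the "symbols" are audio-descriptor frames compared against a similarity threshold; this sketch substitutes discrete symbols standing in for already-clustered frames, and omits the threshold logic entirely.

```python
def build_factor_oracle(seq):
    """Build a factor oracle over a symbol sequence.
    trans[i] maps symbol -> next state; sfx[i] is the suffix link,
    which an improviser can follow to recombine past material."""
    trans = [{} for _ in range(len(seq) + 1)]
    sfx = [-1] * (len(seq) + 1)
    for i, sym in enumerate(seq, start=1):
        trans[i - 1][sym] = i            # forward transition
        k = sfx[i - 1]
        while k >= 0 and sym not in trans[k]:
            trans[k][sym] = i            # factor link from earlier state
            k = sfx[k]
        sfx[i] = 0 if k == -1 else trans[k][sym]
    return trans, sfx

# Each symbol stands in for a clustered descriptor frame.
trans, sfx = build_factor_oracle("aab")
```

Improvisation then alternates between following forward transitions (replaying the sequence) and jumping along suffix links (recombining similar past contexts).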
Musique instrumentale concrète: Timbral transcription in What the Blind See and Without Words
Transcription is an increasingly influential compositional model in the 21st century. Bridging techniques of musique concrète and musique concrète instrumentale, my work since 2007 has focused on using timbral descriptors to transcribe audio recordings for live instrumental ensemble and electronics. The sources and results vary, including transformation of noise-rich playing techniques, transcription of improvised material produced by performer-collaborators, and fusion of instrumental textures with ambient field recordings. However, the technical implementation employs a shared toolkit: sample databases are recorded, analysed, and organised into an audio mosaic with the CataRT package for corpus-based concatenative synthesis. Then OpenMusic is used to produce a corresponding instrumental transcription to be incorporated into the finished score. This chapter presents the approach in two works for ensemble and electronics, What the Blind See (2009) and Without Words (2012), as well as complementary real-time technologies including close miking and live audio mosaicking. In the process transcription is considered as a renewed expressive resource for the extended lexicon of electronically augmented instrumental sound.
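The transcription step can be reduced to a minimal sketch, far simpler than what CataRT and OpenMusic actually perform: analysed (onset, frequency) frames are quantized onto a rhythmic grid and rounded to tempered MIDI pitches. All numeric values below are invented for illustration.

```python
import math

def freq_to_midi(f):
    """Nearest equal-tempered MIDI note number for a frequency in Hz."""
    return round(69 + 12 * math.log2(f / 440.0))

def transcribe(events, grid=0.25):
    """Quantize (onset_seconds, frequency_hz) analysis frames onto a
    rhythmic grid (quarter-note fractions at 60 BPM), returning
    (beat, midi) pairs ready for score formatting."""
    notes = []
    for onset, freq in events:
        beat = round(onset / grid) * grid
        notes.append((beat, freq_to_midi(freq)))
    return notes

# Invented descriptor frames: onset times and detected frequencies.
frames = [(0.02, 440.0), (0.51, 660.0), (1.24, 523.25)]
```

A real pipeline would additionally carry dynamics, playing technique, and instrument assignment, and would leave the final rhythmic notation open to editorial refinement.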
Spherical correlation as a similarity measure for 3D radiation patterns of musical instruments
This work is part of an artistic-research residency where composer Aaron Einbond seeks to apply audio descriptor analysis and corpus-based synthesis techniques to the spatial manipulation of instrumental radiation patterns for projection with a compact spherical loudspeaker array. Starting from a database of 3D directivity patterns of orchestral instruments, measured with spherical microphone arrays in anechoic conditions, we wish to derive spatial descriptors in order to classify the corpus. This paper investigates the use of spherical cross-correlation as a similarity measure between radiation patterns. Considering two directivity patterns f and g as bandlimited, square-integrable functions on the 2-sphere, their correlation can be computed from their spherical harmonic spectra via a spatial inverse discrete Fourier transform. The magnitudes of these Fourier coefficients provide a rotation-invariant representation of the functions on the sphere. One can therefore search for the transformation matrix m, in the 3D rotation group SO(3), which maximizes the cross-correlation, i.e. which offers the optimal spherical shape matching between f and g. The mathematical foundations of these tools are well established in the literature; however, their practical use in the field of acoustics remains limited and challenging. In this study, we apply these techniques to both simulated and measured radiation data, attempting to answer a number of practical questions: How does the similarity measure behave when f and g are not rotated cousins? How can we adapt the cross-correlation formalism established for complex-valued harmonics to real-valued harmonics, as the latter are predominantly used in the field of Ambisonics? Can we compute the correlation of spherical spectra of different bandwidths? What is the impact of the finite sampling distribution used for integration on the SO(3) space? How do we normalize the cross-correlation function? And, most importantly, is the cross-correlation an efficient measure for the classification of 3D radiation patterns?
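The rotation-invariant representation mentioned above can be sketched directly: collecting the squared magnitudes of the spherical harmonic coefficients per degree yields a spectrum unchanged by rotations of the sphere, since a rotation only mixes coefficients within a degree while preserving their norm. The toy comparison below illustrates this invariant, not the full SO(3) cross-correlation; coefficient values and the cosine similarity measure are invented for the example.

```python
import numpy as np

def degree_energies(coeffs, lmax):
    """Per-degree energy E_l = sum_m |f_lm|^2 of a spherical harmonic
    spectrum stored degree by degree as [f_00, f_1-1, f_10, f_11, ...].
    The vector (E_0, ..., E_lmax) is invariant under rotation."""
    energies, i = [], 0
    for l in range(lmax + 1):
        n = 2 * l + 1                       # 2l+1 orders per degree
        energies.append(float(np.sum(np.abs(coeffs[i:i + n]) ** 2)))
        i += n
    return np.array(energies)

def similarity(f, g, lmax):
    """Cosine similarity between rotation-invariant degree spectra."""
    ef, eg = degree_energies(f, lmax), degree_energies(g, lmax)
    return float(ef @ eg / (np.linalg.norm(ef) * np.linalg.norm(eg)))

f = np.array([1.0, 0.5, 0.2, 0.1])  # invented degree-0 and degree-1 coeffs
g = np.array([1.0, 0.1, 0.2, 0.5])  # same per-degree energies ("rotated" f)
```

Because f and g share the same energy in each degree, their similarity is exactly 1; distinguishing patterns that differ only by rotation is precisely what requires the full cross-correlation search over SO(3).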
Composing the Assemblage: Probing Aesthetic and Technical Dimensions of Artistic Creation with Machine Learning
In this article we address the role of machine learning (ML) in the composition of two new musical works for acoustic instruments and electronics through auto-ethnographic reflection on the experience. Our study poses the key question of how ML shapes, and is in turn shaped by, the aesthetic commitments characterizing distinctive compositional practices. Further, we ask how artistic research in these practices can be informed by critical themes from humanities scholarship on material engagement and critical data studies. Through these frameworks, we consider in what ways the interaction with ML algorithms as part of the compositional process differs from that with other music technology tools. Rather than focus on narrowly conceived ML algorithms, we take into account the heterogeneous assemblage brought into play: from composers, performers, and listeners, to loudspeakers, microphones, and audio descriptors. Our analysis focuses on a deconstructive critique of data as contingent on the decisions and material conditions involved in the data creation process. It also explores how interaction among the human and nonhuman collaborators in the ML assemblage has significant similarities to, as well as differences from, existing models of material engagement. Tracking the creative process of composing these works, we uncover the aesthetic implications of the many nonlinear collaborative decisions involved in composing the assemblage.
Fine-tuned Control of Concatenative Synthesis with CATART Using the BACH Library for MAX
The electronic musician’s toolkit is increasingly characterized by fluidity between software, techniques, and genres. By combining two of the most exciting recent packages for MAX, CATART corpus-based concatenative synthesis (CBCS) and BACH: AUTOMATED COMPOSER’S HELPER, we propose a rich tool for real-time creation, storage, editing, re-synthesis, and transcription of concatenative sound. The modular structures of both packages can be advantageously recombined to exploit the best of their real-time and computer-assisted composition (CAC) capabilities. After loading a sample corpus in CATART, each grain, or unit, played from CATART is stored as a notehead in the bach.roll object along with its descriptor data and granular synthesis parameters, including envelope and spatialization. The data is attached to the note itself (pitch, velocity, duration) or stored in user-defined slots that can be adjusted by hand or batch-edited using lambda loops. Once stored, the contents of bach.roll can be dynamically edited and auditioned using CATART for playback. The results can be output as a sequence for synthesis, or used for CAC score-generation through a process termed Corpus-Based Transcription: rhythms are output with bach.quantize and further edited in bach.roll before export as a MUSICXML file to a notation program to produce a performer-readable score. Together these techniques look toward a concatenative DAW with promising capabilities for composers, improvisers, installation artists, and performers.
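The storage scheme can be sketched as note-like records carrying descriptor and synthesis parameters in named slots, with a batch-edit function standing in for a bach lambda loop. The record layout, slot names, and values below are invented for illustration and do not reflect bach's or CataRT's actual data formats.

```python
# Each grain stored as a note-like record: core note data plus extra
# descriptor/synthesis parameters in named "slots", loosely mirroring
# bach.roll noteheads with slot metadata. All values are invented.
grains = [
    {"onset_ms": 0,   "midi": 60, "dur_ms": 120,
     "slots": {"centroid": 800.0,  "gain_db": -6.0, "pan": -0.5}},
    {"onset_ms": 250, "midi": 64, "dur_ms": 90,
     "slots": {"centroid": 2400.0, "gain_db": -3.0, "pan": 0.2}},
]

def batch_edit(grains, slot, fn):
    """Apply fn to one named slot of every stored grain, analogous
    to batch-editing bach slots with a lambda loop."""
    for g in grains:
        g["slots"][slot] = fn(g["slots"][slot])
    return grains

# Example batch edit: attenuate every grain by 6 dB before re-synthesis.
batch_edit(grains, "gain_db", lambda v: v - 6.0)
```

Because the full synthesis state travels with each note, the same records can be re-auditioned, re-synthesized, or quantized into notation without round-tripping through separate files.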
Instrumental Radiation Patterns as Models for Corpus-Based Spatial Sound Synthesis: Cosmologies for Piano and 3D Electronics
The Cosmologies project aims to situate the listener inside a virtual grand piano by enabling computer processes to learn from the spatial presence of the live instrument and performer. We propose novel techniques that leverage measurements of natural acoustic phenomena to inform spatial sound composition and synthesis. Measured radiation patterns of acoustic instruments are applied interactively in response to a live input to synthesize spatial forms in real time. We implement this with software tools for the first time connecting audio descriptor analysis and corpus-based synthesis to spatialization using Higher-Order Ambisonics and machine learning. The resulting musical work, Cosmologies for piano and 3D electronics, explodes the space inside the grand piano out to the space of the concert hall, allowing the listener to experience its secret inner life.
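One ingredient of such a pipeline is Ambisonic encoding of a source direction, with the gain weighted by the radiation pattern's magnitude toward that direction. The first-order sketch below uses ACN channel ordering with SN3D normalization and is far simpler than the Higher-Order Ambisonics implementation described; the constant gain argument stands in for a lookup into a measured radiation pattern.

```python
import math

def foa_encode(azimuth, elevation, gain=1.0):
    """First-order Ambisonic gains (ACN order W, Y, Z, X with SN3D
    normalization) for a source at the given direction in radians.
    `gain` stands in for the radiation pattern's magnitude toward
    that direction, which would be read from measured data."""
    ca, sa = math.cos(azimuth), math.sin(azimuth)
    ce, se = math.cos(elevation), math.sin(elevation)
    return [gain * 1.0,      # W (omnidirectional)
            gain * sa * ce,  # Y (left-right)
            gain * se,       # Z (up-down)
            gain * ca * ce]  # X (front-back)

# A frontal source at ear level: all energy in W and X.
w, y, z, x = foa_encode(0.0, 0.0)
```

Summing many such encoded directions, each weighted by the measured directivity, approximates projecting the instrument's radiation pattern into the Ambisonic sound field; decoding to the loudspeaker array is a separate stage.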